# High-precision Semantic Understanding
## Vit Giantopt Patch16 Siglip 384.v2 Webli
- Author: timm
- License: Apache-2.0
- Description: ViT image encoder based on SigLIP 2, designed for timm, suitable for vision-language tasks.
- Task: Image Classification
- Tags: Transformers
- Downloads: 160 · Likes: 0
## Vit Gopt 16 SigLIP2 256
- Author: timm
- License: Apache-2.0
- Description: SigLIP 2 vision-language model trained on the WebLI dataset, suitable for zero-shot image classification tasks.
- Task: Text-to-Image
- Downloads: 43.20k · Likes: 0
## Vit L 16 SigLIP2 384
- Author: timm
- License: Apache-2.0
- Description: A SigLIP 2 vision-language model trained on the WebLI dataset, suitable for zero-shot image classification tasks.
- Task: Text-to-Image
- Downloads: 581 · Likes: 0
## Siglip2 Giant Opt Patch16 384
- Author: google
- License: Apache-2.0
- Description: SigLIP 2 improves on the original SigLIP pretraining objective, combining several techniques to strengthen semantic understanding, localization, and dense feature extraction.
- Task: Text-to-Image
- Tags: Transformers
- Downloads: 26.12k · Likes: 14
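The SigLIP 2 checkpoints above are described as suitable for zero-shot image classification. A minimal sketch using the Transformers pipeline API; the repository id is assumed from the listing (google / Siglip2 Giant Opt Patch16 384) and should be confirmed on the hub:

```python
from transformers import pipeline

# Repository id assumed from the listing above; confirm before use.
MODEL_ID = "google/siglip2-giant-opt-patch16-384"

def classify(image, labels):
    """Zero-shot classify an image (path, URL, or PIL.Image) against text labels.

    Downloads model weights on first call.
    """
    clf = pipeline("zero-shot-image-classification", model=MODEL_ID)
    return clf(image, candidate_labels=labels)

# Example call:
# classify("cat.jpg", ["a photo of a cat", "a photo of a dog"])
```

Each result is a list of `{"label", "score"}` dicts, one per candidate label.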
## Chinese Text Correction 7b
- Author: shibing624
- License: Apache-2.0
- Description: A Chinese text-correction model built on Qwen2.5-7B-Instruct, a 7B-parameter Chinese instruction-tuned large language model based on the Qwen2.5 architecture; suitable for text generation and reasoning tasks.
- Task: Large Language Model
- Tags: Transformers, Chinese
- Downloads: 522 · Likes: 16
## Chinese Text Correction 1.5b
- Author: shibing624
- License: Apache-2.0
- Description: A Chinese text-correction model built on Qwen2.5-1.5B-Instruct, a 1.5-billion-parameter Chinese instruction-tuned model based on the Qwen2.5 architecture; suitable for text generation and reasoning tasks.
- Task: Large Language Model
- Tags: Transformers, Chinese
- Downloads: 1,085 · Likes: 9
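Since both correction models are instruction-tuned causal LMs, they can be driven through a chat template. A minimal sketch; the repository id and the correction prompt ("文本纠错：", "text correction:") are assumptions inferred from the listing, not documented interfaces:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the listing (shibing624 / Chinese Text Correction 1.5b).
MODEL_ID = "shibing624/chinese-text-correction-1.5b"

def correct(text: str) -> str:
    """Ask the instruction-tuned model to correct a Chinese sentence.

    Downloads model weights on first call. The prompt prefix below is an
    assumed instruction; check the model card for the recommended format.
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    messages = [{"role": "user", "content": f"文本纠错：\n{text}"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```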
## BERTIS
- Author: mireillfares
- Description: BERTIS is a BERT-based text classification model that categorizes input texts into 14 predefined image-schema categories.
- Task: Text Classification
- Tags: Transformers, English
- Downloads: 52 · Likes: 1
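As a standard BERT text classifier, BERTIS fits the Transformers text-classification pipeline. A minimal sketch; the repository id is assumed from the listing (mireillfares / BERTIS):

```python
from transformers import pipeline

# Repository id assumed from the listing above; confirm before use.
MODEL_ID = "mireillfares/BERTIS"

def image_schema(sentence: str):
    """Classify a sentence into one of the model's 14 image-schema categories.

    Downloads model weights on first call; returns [{"label", "score"}].
    """
    clf = pipeline("text-classification", model=MODEL_ID)
    return clf(sentence)
```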